Translation and Dictionary
Words near each other
・ Amedeus Alexander of Savoy
・ Amedeus Msarikie
・ Amedia
・ Amedichi River
・ Amedisys Home Health and Hospice Care
・ Amedroz Lake
・ Amedy Coulibaly
・ Amedzofe
・ Amedzofe (history)
・ Amedzofe, Ghana
・ Amedée Roy Stadium
・ Amdad
・ Amdahl
・ Amdahl Corporation
・ Amdahl UTS
Amdahl's law
・ Amdal
・ Amdalai
・ Amdalaye
・ Amdang
・ Amdang language
・ Amdang people
・ Amdanga (community development block)
・ Amdanga (Vidhan Sabha constituency)
・ Amdarch
・ Amdavad ni Gufa
・ Amde Werq
・ Amden
・ Amden Formation
・ Amderma



Amdahl's law : Wikipedia English edition
Amdahl's law

In computer architecture, Amdahl's law (or Amdahl's argument) gives the theoretical speedup in latency of the execution of a task ''at fixed workload'' that can be expected of a system whose resources are improved. It is named after computer scientist Gene Amdahl, and was presented at the AFIPS Spring Joint Computer Conference in 1967.
Amdahl's law can be formulated in the following way:
: S_\text{latency}(s) = \frac{1}{(1 - p) + \frac{p}{s}},
where
* ''S''<sub>latency</sub> is the theoretical speedup in latency of the execution of the whole task;
* ''s'' is the speedup in latency of the execution of the part of the task that benefits from the improvement of the resources of the system;
* ''p'' is the proportion of the execution time of the whole task, ''before the improvement'', that is taken by the part that benefits from the improvement of the resources of the system.
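As a concrete illustration (not part of the article text), the formula can be written as a small function; the helper name amdahl_speedup and the sample values are assumptions used only for illustration:
<syntaxhighlight lang="python">
def amdahl_speedup(p, s):
    """Theoretical speedup in latency of the whole task (Amdahl's law).

    p -- fraction of the original execution time that benefits from the improvement
    s -- speedup in latency of that part
    """
    return 1.0 / ((1.0 - p) + p / s)

# Example: 95% of the task is sped up by a factor of 10.
print(amdahl_speedup(0.95, 10))  # ~6.9
</syntaxhighlight>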
Furthermore,
:\begin{align}
S_\text{latency}(s) & \leq \frac{1}{1 - p}, \\
\lim_{s \to \infty} S_\text{latency}(s) & = \frac{1}{1 - p}
\end{align}

show that the theoretical speedup of the execution of the whole task increases with the improvement of the resources of the system, and that, regardless of the magnitude of the improvement, the theoretical speedup is always limited by the part of the task that cannot benefit from the improvement.
Amdahl's law is often used in parallel computing to predict the theoretical speedup when using multiple processors. For example, if a program needs 20 hours to complete using a single processor core, and a particular part of the program that takes one hour to execute cannot be parallelized, while the remaining 19 hours (95% of the execution time) can be parallelized, then regardless of how many processors are devoted to a parallelized execution of this program, the minimum execution time cannot be less than that critical one hour. Hence, the theoretical speedup is limited to at most 20 times (1/(1 − ''p'') = 1/0.05 = 20). For this reason, parallel computing is worthwhile only with a small number of processors, or for highly parallelizable programs.
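To make the arithmetic of this example concrete, the following sketch (which assumes ideal scaling, i.e. the parallel part is sped up by a factor equal to the processor count) evaluates the formula for a few processor counts:
<syntaxhighlight lang="python">
def amdahl_speedup(p, s):
    return 1.0 / ((1.0 - p) + p / s)

p = 19 / 20                    # 19 of the 20 hours can be parallelized
for n in (2, 8, 64, 1024):     # number of processors, assumed equal to s
    print(n, round(amdahl_speedup(p, n), 2))   # 1.9, 5.93, 15.42, 19.64
print("upper bound:", 1 / (1 - p))             # 20.0, set by the serial one hour
</syntaxhighlight>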
==Derivation==
A task executed by a system whose resources are improved compared to an initial similar system can be split up into two parts:
* a part that does not benefit from the improvement of the resources of the system;
* a part that benefits from the improvement of the resources of the system.
''Example.'' — A computer program that processes files from disk. A part of that program may scan the directory of the disk and create a list of files internally in memory. After that, another part of the program passes each file to a separate thread for processing. The part that scans the directory and creates the file list cannot be sped up on a parallel computer, but the part that processes the files can.
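As an illustrative sketch of that structure (the use of threads, the directory handling, and the body of process_file are assumptions, not details from the article), the serial and parallel parts might look like this:
<syntaxhighlight lang="python">
import os
from concurrent.futures import ThreadPoolExecutor

def process_file(path):
    # Placeholder for the per-file work; this part can run in parallel.
    with open(path, "rb") as f:
        return len(f.read())

def run(directory):
    # Serial part: scan the directory and build the file list in memory.
    paths = [os.path.join(directory, name) for name in os.listdir(directory)]
    # Parallel part: hand each file to a separate worker for processing.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(process_file, paths))
</syntaxhighlight>
Only the second part shrinks as workers are added; the directory scan bounds the overall speedup exactly as the law describes.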
The execution time of the whole task before the improvement of the resources of the system is denoted ''T''. It includes the execution time of the part that does not benefit from the improvement of the resources and the execution time of the part that benefits from it. The fraction of the execution time of the whole task taken, ''before the improvement'', by the part that benefits from the improvement of the resources is denoted ''p''. The fraction corresponding to the part that does not benefit from it is therefore 1 − ''p''. Then
: T = (1 - p)T + pT.
It is the execution of the part that benefits from the improvement of the resources that is sped up by the factor ''s'' after the improvement of the resources. Consequently, the execution time of the part that does not benefit from it remains the same, while that of the part that benefits from it becomes
: \frac{p}{s} T.
The theoretical execution time ''T''(''s'') of the whole task after the improvement of the resources is then
: T(s) = (1 - p)T + \frac{p}{s}T.
Amdahl's law gives the theoretical speedup in latency of the execution of the whole task ''at fixed workload W'', which yields
: S_\text{latency}(s) = \frac{TW}{T(s)W} = \frac{T}{T(s)} = \frac{1}{(1 - p) + \frac{p}{s}}.
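A quick numerical check of this derivation (the values of ''T'', ''p'' and ''s'' below are arbitrary, chosen only for illustration):
<syntaxhighlight lang="python">
T, p, s = 20.0, 0.95, 8.0

T_s = (1 - p) * T + (p / s) * T     # execution time after the improvement
lhs = T / T_s                       # speedup measured from the two times
rhs = 1.0 / ((1 - p) + p / s)       # closed-form Amdahl expression
print(lhs, rhs)                     # both ~5.93
</syntaxhighlight>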

Excerpt source: Wikipedia, the free encyclopedia
Read the full article on "Amdahl's law" at Wikipedia


